Verma, Amit
- Inconsistency Detection in Software Component Source Code using Ant Colony Optimization and Neural Network Algorithm
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran, Mohali – 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 40 (2016)
Abstract
Objectives: Inconsistency detection in source code is one of the major challenges for software developers. Consistent identifiers are needed to reduce code inconsistencies, so developers should either know how to create conceptual identifiers or know how to detect inconsistencies in source code. Methods/Statistical Analysis: A range of tools is available for detecting different types of inconsistency, but the existing tools are not well suited to semantic and syntactic inconsistencies or part-of-speech tagging. Findings: In this paper, an autonomous tool, Automatic Bad Code Detector (ABCD), is developed to detect semantic, syntactic and part-of-speech inconsistencies in source code. ABCD identifies inconsistencies based on detected code clones, which are found by matching the test code against a code repository. A Java-project-based code repository is considered for experimentation, and ABCD is evaluated on different Java projects to find inconsistencies in their source code. The main inconsistency detectors in ABCD are Ant Colony Optimization and the Neural Network Back Propagation algorithm. Further, ABCD is useful when re-implementing new versions of Java code. Applications/Improvements: The concept is evaluated for semantic, syntactic, POS-word and POS-phrase inconsistencies using precision, and the overall efficiency of ABCD is evaluated in terms of precision, recall and f-measure.
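The ABCD implementation itself is not reproduced in this listing; as a minimal sketch of the underlying clone-matching idea, a code fragment can be compared against repository snippets by identifier-token overlap (the function names, the toy repository and the 0.6 threshold below are assumptions for illustration, not part of ABCD):

```python
import re

def tokens(code: str) -> set:
    # Split source into identifier/keyword tokens, ignoring punctuation.
    return set(re.findall(r"[A-Za-z_]\w*", code))

def jaccard(a: set, b: set) -> float:
    # Jaccard similarity: |intersection| / |union|.
    return len(a & b) / len(a | b) if a | b else 0.0

def find_clones(test_code: str, repository: dict, threshold: float = 0.6):
    # Report repository snippets whose token overlap with the
    # test code exceeds the (assumed) similarity threshold.
    t = tokens(test_code)
    return [name for name, snippet in repository.items()
            if jaccard(t, tokens(snippet)) >= threshold]

repo = {
    "Stack.push": "public void push(int item) { data[top++] = item; }",
    "Queue.poll": "public int poll() { return data[head++]; }",
}
print(find_clones("void push(int item) { data[top++] = item; }", repo))
# → ['Stack.push']
```

A detected clone pair would then be the input on which identifier inconsistencies are checked.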
Keywords: Ant Colony Optimization, Automatic Bad Code Detector, Code Repository, Inconsistency Detection, Neural Network Back Propagation Algorithm.
- Analysis and Implementation of Data Mining Algorithms for Deploying ID3, CHAID and Naive Bayes for Random Dataset
Authors
Affiliations
1 Department of Computer Science and Engineering, CGC Landran, Mohali - 140307, Punjab, IN
2 Department of Computer Science and Engineering, Chandigarh University, Mohali - 140413, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 40 (2016)
Abstract
Objectives: Effective data processing for fast retrieval of information has become a pressing issue, since a modern document contains not only text but images, video and audio as well. In this paper, a brief history of storage devices from the Vedic period to the world of digitization, with some important inventions, is presented. Method/Statistical Analysis: The paper also discusses how data is transformed for the decision-making process, along with preprocessing techniques. Findings: A comparative analysis of various techniques is given, covering their specific algorithms, uses, advantages, limitations and the applications where they can be implemented, giving an insight into these techniques. Finally, experimental results on three algorithms (ID3, CHAID, Naive Bayes) using Rapid Miner are evaluated to compare their performance on three parameters (accuracy, precision and recall). Applications/Improvements: The empirical results show ID3 to be the most accurate at 95.95%, while CHAID achieves 89.11% and Naive Bayes classifies 81.77% of the data correctly.
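The experiments above were run in Rapid Miner; as a pure-Python illustration of what ID3 computes internally, the snippet below calculates entropy and information gain on a toy dataset (the attribute names and rows are invented for the example):

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a class-label list, in bits.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target):
    # ID3 splits on the attribute with the highest information gain:
    # entropy of the whole set minus the weighted entropy of each subset.
    labels = [r[target] for r in rows]
    gain = entropy(labels)
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

data = [
    {"outlook": "sunny", "windy": False, "play": "no"},
    {"outlook": "sunny", "windy": True,  "play": "no"},
    {"outlook": "rain",  "windy": False, "play": "yes"},
    {"outlook": "rain",  "windy": True,  "play": "yes"},
]
print(information_gain(data, "outlook", "play"))  # → 1.0 (a perfect split)
print(information_gain(data, "windy", "play"))    # → 0.0 (uninformative)
```

CHAID replaces this gain criterion with chi-squared significance tests, and Naive Bayes dispenses with the tree entirely in favour of per-attribute likelihoods.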
Keywords: CHAID, Development of Storage Devices, ID3, Information Retrieval, Rapid Miner, Retrieval Techniques, Visualization.
- Latent Fingerprint Recognition using Hybridization Approach of Partial Differential Equation and Exemplar Inpainting
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran, Mohali – 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 45 (2016)
Abstract
Objectives: Biometric fingerprint recognition is one of the most widely adopted approaches to online security. However, with the user-friendly behaviour of advanced computing systems, the biometric approach is now less used in everyday authentication; its main remaining application is identifying criminal activity at crime scenes, where fingerprints are mostly available in latent form. Latent fingerprints are finger-skin impressions accidentally left behind by criminals. They are invisible to the naked eye and are usually captured with lasers, chemicals, powders, etc. Such captured latent fingerprints carry little minutiae information, with distorted ridges and a high level of pattern overlap, so it is not easy to identify criminals from partial fingerprint information. Methods/Statistical Analysis: In this paper, a hybrid approach of Exemplar Inpainting and Partial Differential Equations is used to fill in the distorted ridges. The main goal of this work is to present a framework to reconstruct distorted latent fingerprints and then find the best match for the enhanced, reconstructed prints. For the experimentation of the proposed hybrid concept, the IIIT Delhi latent and NIST SD-27 fingerprint databases are used, with different enhancement filters: the Canny edge detector, Prewitt filter, Laplacian filter, Sobel filter, Gaussian low-pass filter and Gaussian high-pass filter. Findings: Different filters perform differently on the dataset images for latent fingerprint recognition. The overall performance of the automated latent fingerprint identification approach is analysed in terms of false acceptance rate and genuine acceptance rate. Application/Improvements: In this way, latent fingerprints can be used to investigate criminal activity at crime scenes. Overall, the Canny filter shows better enhancement results than the other filters considered.
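The filters compared above are standard image-processing operators; as a minimal pure-Python sketch, one of them, the Sobel edge filter, is shown on a toy grayscale grid (a real experiment would use an image library and the fingerprint databases named above):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve(img, kernel):
    # Valid-mode 2D filtering (no padding) of a grayscale grid.
    h, w, k = len(img), len(img[0]), len(kernel)
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(k) for j in range(k))
             for x in range(w - k + 1)]
            for y in range(h - k + 1)]

def sobel_magnitude(img):
    # Approximate gradient magnitude |Gx| + |Gy|, as in Sobel edge detection.
    gx, gy = convolve(img, SOBEL_X), convolve(img, SOBEL_Y)
    return [[abs(a) + abs(b) for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

# A vertical edge: dark left half, bright right half.
image = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(image))  # → [[1020, 1020], [1020, 1020]]
```

The strong uniform response marks the vertical boundary; a ridge in a fingerprint image produces the same kind of local gradient peak.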
Keywords: Binarization Approach, Criminal Activities, Exemplar Inpainting, Latent Fingerprint, Minutiae Extraction, Partial Differential Equation.
- Comparative Analysis of Information Extraction Techniques for Data Mining
Authors
Affiliations
1 Department of Computer Science and Engineering, CGC Landran, Mohali - 140307, Punjab, IN
2 Department of Computer Science and Engineering, Chandigarh University, Mohali - 140413, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 11 (2016)
Abstract
Background/Objectives: This paper traces the evolution of data-processing skill into today's advanced data-processing taxonomy, from the Mesolithic era to recent years, and gives a comparative study of prevailing tools and techniques useful mainly for the analysis of bulky data. Methods/Statistical Analysis: Researchers adopt various methods for analysing large amounts of data, each varying in its parameters and datasets according to need. These methods are implemented on HDFS, MapReduce and Hadoop environments, with integration of the R tool. Some methods are enhanced by sentiment analysis through NLP, which improves the performance of density analysis. Findings: Data, and its associated facts, have existed since the birth of the human species, beginning with manual illustration and gradually advancing to current state-of-the-art storage and processing. Big data involves novel techniques for managing information within a limited run time, and is acutely beneficial to business growth, society and scientific research. The paper provides an overview of the state of the art and focuses on the usage of conventional as well as advanced tools and techniques for effective information extraction. Applications/Improvements: To handle such prodigious data, there is a need to move beyond traditional data-filtering techniques and adopt the new big data diagnostic tools.
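The survey mentions HDFS and MapReduce; the MapReduce programming model itself can be sketched in a few lines of single-process Python (a stand-in for a distributed run, using the canonical word-count example rather than any workload from the paper):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in one document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "data mining tools"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
print(counts["big"], counts["data"], counts["tools"])  # → 2 2 2
```

In a real Hadoop job the mappers and reducers run on different nodes and the shuffle moves data across the network, but the contract between the three phases is exactly this.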
Keywords: Big Data, Data Analysis, Data Mining, Evolution, Techniques, Tools
- Comparative Analysis of Data Mining Tools and Techniques for Information Retrieval
Authors
Affiliations
1 Department of Computer Science and Engineering, Chandigarh University, Mohali - 140413, Punjab, IN
2 Department of Computer Science and Engineering, Chandigarh Engineering College, Mohali - 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 11 (2016)
Abstract
Background/Objectives: Many information retrieval techniques are available for getting information from different kinds of sources. The main aim of this paper is to raise information retrieval activities to a higher level. Different methods for information retrieval have been studied and discussed, involving the Fuzzy Ontology Generation frAmework (FOGA) along with Formal Concept Analysis (FCA)-based clustering and a keyword-matching approach. The Hidden Markov Model has been used for intelligent and efficient retrieval of data from search engines, for correct identification and retrieval from the database. Classification algorithms have been used for community detection and for converting a large community graph into sub-community graphs for better study and usage. Findings: The fuzzy ontology generation framework can automatically generate a fuzzy ontology, a task that is otherwise very hard and time-consuming. Clustering of data can be done using formal concept analysis together with keyword matching. A large amount of data exists under similar words but with different meanings, which makes retrieving exactly the required data in a short time problematic. Here the Hidden Markov Model can be used, since it can infer a non-observable (hidden) stochastic process from an observable one. Application/Improvements: The Generalized Expectation-Maximization algorithm used with the Hidden Markov Model can estimate unknown parameters. By adding a frequency-tracking algorithm to the Hidden Markov Model, audio data can also be tracked in a large database. A community detection algorithm, together with the Infomap and BigClam algorithms and the Hidden Markov Model, increases the modularity of the data. Applications include information retrieval over different types of data in a much faster and more efficient way.
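Recovering the hidden state sequence of an HMM from observations, as invoked above, is the job of the standard Viterbi algorithm; a minimal sketch follows, using the textbook two-state weather example rather than any model from the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Dynamic programming over time steps: V[t][s] is the probability
    # of the best hidden-state path ending in state s after observation t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start, trans, emit))
# → ['Sunny', 'Rainy', 'Rainy']
```

In a retrieval setting the observations would be query or click events and the hidden states the user's underlying intent; the parameters would be estimated with Expectation-Maximization, as the abstract notes.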
Keywords: Classification, Clustering, Community Detection Algorithm, Fuzzy Ontology Generation Framework, Formal Concept Analysis, Hidden Markov Model
- Comparative Analysis on Load Balancing Techniques in Cloud Computing
Authors
Affiliations
1 Department of Computer Science, Chandigarh Engineering College, Landran, Mohali – 140307, Punjab, IN
2 Department of Computer Science, Chandigarh University, Gharuan (Mohali) - 160036, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 11 (2016)
Abstract
Background/Objectives: Cloud computing is an arena that is ruling the world of information technology, and every user defines the technology according to their own use of it. This paper describes the complete evolution of cloud computing from its beginning. Findings: Given the vast literature in the field of load balancing, new scholars can find it confusing to identify a starting point for their research. Therefore, an exhaustive comparison has been made, for a better understanding of cloud evolution, of the various algorithms proposed over the past decades; this lets researchers analyse the existing scenarios and find better ways to resolve open questions. Application/Improvements: The assessment of the algorithms will help new researchers identify the parameters that need the most attention in order to meet their targets and obtain better outcomes in the field.
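Two of the classic dispatch policies that such comparisons cover can be contrasted in a few lines; the sketch below is a generic illustration of round-robin versus least-loaded dispatch, not any specific algorithm surveyed in the paper (the server names are invented):

```python
from itertools import cycle

class RoundRobin:
    # Cycle through servers regardless of their current load.
    def __init__(self, servers):
        self._order = cycle(servers)
    def pick(self, load):
        return next(self._order)

class LeastLoaded:
    # Always dispatch to the server with the fewest active tasks.
    def pick(self, load):
        return min(load, key=load.get)

def dispatch(policy, tasks, servers):
    # Simulate assigning `tasks` identical jobs under a policy.
    load = {s: 0 for s in servers}
    for _ in range(tasks):
        load[policy.pick(load)] += 1
    return load

servers = ["vm1", "vm2", "vm3"]
print(dispatch(RoundRobin(servers), 6, servers))  # even split across the VMs
print(dispatch(LeastLoaded(), 6, servers))        # also even for uniform jobs
```

With identical tasks both policies balance perfectly; the policies diverge once task durations or server capacities differ, which is exactly the regime the surveyed algorithms target.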
Keywords: Cloud Computing, Information Extraction and Performance Measure, Load Balancing
- A Survey on Digital Image Processing Techniques for Tumor Detection
Authors
Amit Verma 1, Gayatri Khanna 1
Affiliations
1 Computer Science Engineering, UIE, Chandigarh University, Mohali - 140413, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 14 (2016)
Abstract
The paper presents a formal review of the evolution of image processing techniques for tumor detection, comparing the existing techniques to find the one that gives the best results for detecting and classifying tumors. The propounded technique aims to address the identified gaps by giving effective results in identifying the tumor.
Keywords: Classification of Tumor, Oncologists, Parameters, Region of Interest, Strengths, Tumor Detection
- Algorithmic Approach to Data Mining and Classification Techniques
Authors
Affiliations
1 Department of Computer Science and Engineering, Chandigarh Engineering College, Mohali - 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 28 (2016)
Abstract
Objective/Background: This paper highlights the progression from simple data access to data mining, from past years to the present. Its main aim is a comparative study of the tools, techniques and algorithms used for analysing huge amounts of data. Methods/Statistical Analysis: Different data mining methods have been studied and discussed, including decision trees, neural networks, regression and clustering techniques, implemented on different tools for fraud detection. Algorithms used for data mining, such as AdaBoost, PageRank and K-means, are also discussed. To generate relevant information from data streams, a frequent pattern generation tree algorithm is also implemented and discussed. Findings: Of the many available algorithms, the decision tree has been found the most suitable for mining data, provided the data is restricted to some thousands of entries. Its most prominent advantage lies in its clear illustration as a graphical tree with an inherent tree structure. However, ambiguity should be carefully dealt with to maintain consistency. Applications: Data mining helps extract relevant data in various ways; the areas where it is being used are also discussed in the paper. Future Scope: The paper's scope extends to an exhaustive survey and analysis of all available empirical and conceptual techniques and tools in the area of data mining.
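Of the algorithms mentioned above, K-means is compact enough to sketch in full; a minimal one-dimensional version follows (the data points and fixed initial centroids are invented for the example):

```python
def kmeans_1d(points, centroids, iterations=10):
    # Lloyd's algorithm: assign each point to its nearest centroid,
    # then move each centroid to the mean of its assigned points.
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centroids = [sum(ps) / len(ps) if ps else c
                     for c, ps in clusters.items()]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
# Two well-separated groups: the centroids converge near 1.0 and 8.0.
print(kmeans_1d(data, centroids=[0.0, 10.0]))
```

The same assign-then-update loop generalizes to higher dimensions by replacing the absolute difference with a Euclidean distance.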
Keywords: Association Rule Mining, Classification, Clustering, Data, Data Mining, Decision Tree, Neural Network.
- Ontology based Retrieval of Components
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran, Mohali - 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 48 (2016)
Abstract
Objectives: Component-based development introduces the practice of reusing a component rather than redeveloping it. To reuse a component, it must first be retrieved from a repository. Methods/Statistical Analysis: There are many ways to retrieve a component, each with its own pros and cons. To retrieve the best component, the relationships between components must be captured. This paper presents an overview of the whole process of component retrieval using an ontology. Findings: An ontology aids effective retrieval by providing relationships between concepts (classes), interfaces, etc. Application/Improvements: Additionally, the paper reviews what a component is, innovations in ontology, the reusable assets in a component, and how components are formally described using languages. The goal of ontology-based retrieval is that the best component is retrieved from the interconnected repositories.
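As a toy illustration of why relationships between concepts help retrieval, a repository can be indexed by concept, with an is-a hierarchy letting a query for a general concept also retrieve components registered under its specializations (every name below is invented for the example):

```python
# A tiny is-a ontology: child concept -> parent concept.
IS_A = {
    "Stack": "Collection",
    "Queue": "Collection",
    "PriorityQueue": "Queue",
}

# Components registered under the most specific concept they implement.
COMPONENTS = {
    "Stack": ["ArrayStack"],
    "Queue": ["LinkedQueue"],
    "PriorityQueue": ["BinaryHeap"],
}

def specializations(concept):
    # The concept itself plus everything transitively a kind of it.
    found = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in IS_A.items():
            if parent in found and child not in found:
                found.add(child)
                changed = True
    return found

def retrieve(concept):
    # Plain keyword matching on "Queue" would miss BinaryHeap;
    # walking the ontology's is-a links recovers it.
    return sorted(c for s in specializations(concept)
                  for c in COMPONENTS.get(s, []))

print(retrieve("Queue"))  # → ['BinaryHeap', 'LinkedQueue']
```

A full ontology language such as OWL adds interfaces, properties and constraints on top of this subclass backbone, but the retrieval benefit comes from the same relationship traversal.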
Keywords: Algorithm, Ontology Languages, Retrieval.
- Detection and Classification of Disease Affected Region of Plant Leaves using Image Processing Technique
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran - 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 48 (2016)
Abstract
Objective: A variety of plants exist on the earth's surface and play an enormous role in human life, but several factors can hinder plant growth: weather conditions, lack of adequate resources, plant diseases, and lack of expert knowledge in plant care. Statistical Analysis: Plant diseases are one of the major factors reducing plant growth. In earlier years it was not easy to detect plant diseases in time, but in the present computing era digital image processing has developed so rapidly that it can be applied to many real-life problems. Findings: In this research, plant leaf diseases are detected and classified using image processing techniques, following the fundamental steps of image processing and leaf disease detection with a final optimization stage. Image acquisition considers an RGB colour image of the disease-affected leaf; contrast is enhanced with histogram equalization; segmentation is performed with K-means clustering; feature extraction captures the symptoms of leaf disease using Grey Level Co-occurrence Matrices; a Support Vector Machine handles detection and classification; and finally Ant Colony Optimization is applied to optimize the result. Applications/Improvements: For the experimentation, datasets of plant leaves affected by the bacterial disease 'Bacterial Blight' and the fungal diseases 'Alternaria alternata', 'Fungal Leaf Spot' and 'Fungus Anthracnose' are considered. The proposed concept is also evaluated by comparative analysis against the existing SVM and Improved SVM approaches.
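Of the pipeline steps listed above, histogram equalization is the simplest to sketch; the pure-Python version below operates on a toy low-contrast grayscale image (real use would run on full images via an image library, but the mapping through the cumulative distribution is the same):

```python
def equalize(pixels, levels=256):
    # Histogram equalization: map each gray level through the
    # normalized cumulative distribution so the output spreads
    # over the full intensity range.
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast image crowded into levels 100-102.
flat = [100, 100, 101, 101, 101, 102, 102, 102]
print(equalize(flat))  # → [0, 0, 128, 128, 128, 255, 255, 255]
```

The three crowded levels are stretched across the whole 0-255 range, which is what makes disease-spot boundaries easier to segment in the next stage.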
Keywords: Ant Colony Optimization, Histogram Equalization, Image Processing, K-Means Clustering, Plant Leaf Disease Detection, Support Vector Machine.
- Algorithm Design for GIS using BFO and Neural Network Approach for Identification of Groundwater
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran - 140307, Punjab, IN
Source
Indian Journal of Science and Technology, Vol 9, No 48 (2016)
Abstract
Objectives: Groundwater is an important resource contributing significantly to the development of natural life. However, over-exploitation has considerably depleted groundwater and has also led to land subsidence in some places. Methods/Statistical Analysis: Groundwater zones are demarcated using remote sensing and Geographic Information System (GIS) techniques. Findings: In this research, a definitive methodology is proposed to identify groundwater by integrating Bacterial Foraging Optimization (BFO) with a neural network technique. Fuzzy logic is used for training, after which optimization techniques find a suitable feature set that classifies more accurately. Applications/Improvements: It is concluded that geoinformatics technology is efficient and useful for groundwater identification. Parameters such as the kappa coefficient, water level and algorithm accuracy are evaluated to detect the water percentage in the region covered by the satellite image obtained through remote sensing. This research will therefore be useful for effectively identifying suitable locations for water extraction.
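The evaluation above mentions the kappa coefficient; for a binary water / no-water classification map, Cohen's kappa can be computed from a confusion matrix as follows (the counts are made up for the example):

```python
def cohens_kappa(tp, fp, fn, tn):
    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement),
    # a standard accuracy measure for classified remote-sensing maps.
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Chance agreement from the marginal totals of the classifier
    # and the ground truth.
    chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical pixel counts: classified water vs. ground-truth water.
print(round(cohens_kappa(tp=40, fp=10, fn=5, tn=45), 3))  # → 0.7
```

Unlike raw accuracy, kappa discounts agreement that would occur by chance, which matters when one class (here, non-water pixels) dominates the scene.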
Keywords: BFO, Fuzzy Logic, Ground Water Detection, Kappa Coefficient, Neural Network.
- An Intelligent System to Improve Quality of Service in Healthcare Monitoring Over WSN
Authors
Affiliations
1 Computer Science and Engineering, Chandigarh Engineering College, Landran, Mohali - 140307, Punjab, IN